The Polygraph Place



  Polygraph Place Bulletin Board
  Professional Issues - Private Forum for Examiners ONLY
  OSS-3 Army MGQT validation data - APA poster

Author Topic:   OSS-3 Army MGQT validation data - APA poster
rnelson
Member
posted 07-29-2009 05:56 AM
Coming soon to a Polygraph conference near you...

Here are some charts illustrating the results of a validation experiment using OSS-3 with a sample of confirmed Army MGQT cases from the DoDPI-2002 archive.

The sample was originally used in a validation experiment using OSS-1 (Krapohl & Norris, 2000).

It is now also possible to improve on those results.

Below are the PPV and NPV.


OSS-3 appears to be performing as well as or better than the original examiners.

Part of the improvement comes from a new, still-experimental addition: an attempt to create an automated, algorithmic approach to artifact rejection. The results above were obtained without any manual or visual review of the data.

At the present time, our algorithms can do complex measurement and complex math with perfect/automated reliability, but humans can do a better job evaluating the interpretable or usable quality of the data. In reality, data quality and artifact rejection tasks can also become measurement and math problems.
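As a rough illustration of what "artifact rejection as a math problem" can mean (my own sketch, not the OSS-3 code): one simple framing is to ask whether any segment measurement is a statistical outlier relative to the rest of the tracing. The function name, threshold, and sample values below are all hypothetical.

```python
from statistics import median

def robust_artifact_flags(measurements, threshold=3.5):
    """Flag values whose modified z-score (median/MAD based) exceeds
    the threshold. A generic outlier rule, not OSS-3's actual method."""
    med = median(measurements)
    mad = median(abs(x - med) for x in measurements)
    if mad == 0:
        # no spread at all: nothing can be called an outlier
        return [False] * len(measurements)
    # 0.6745 rescales the MAD so the score is comparable to a z-score
    return [abs(0.6745 * (x - med) / mad) > threshold for x in measurements]

# Hypothetical pneumo segment measurements: one deviant excursion
segments = [10.1, 9.8, 10.3, 9.9, 31.0, 10.0]
print(robust_artifact_flags(segments))  # flags only the 31.0 segment
```

The median/MAD pair is used instead of mean/standard deviation so that the artifact itself does not distort the baseline it is being compared against.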

Here is a sneak preview of that.
http://www.raymondnelson.us/images/Automated_Artifact_Detection_preview_6-24-09.pdf

.02

r


------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)




skipwebb
Member
posted 07-30-2009 12:36 PM
Ray, is this the same OSS-3 version or a new one? The results are really impressive! Great work.


rnelson
Member
posted 07-31-2009 05:46 AM
The results shown by the lighter blue bars, second to the right in each group, are the same OSS-3 version as always.

OSS-3 is performing as expected with the Army cases.

The results shown by the pukey-salmon colored bars, on the right end of each group, were obtained using OSS-3 after replacing the Test of Proportions with the results of another project, intended to automate the detection of artifacts.

In this experiment, the artifact rejection algorithm was pointed only at the pneumos. The artifact rejection method uses a transformation that I first used in the RI algorithm project, and later realized could be coupled with the Test of Proportions.

If you look at the .pdf link you'll notice the algorithm doesn't agree with us on every detail, but it correctly classifies both the non-CM case and the CM cases. At present, if the data are messy and widely varied, the normal-looking segments may be regarded as the mathematical outliers.
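That caveat is easy to demonstrate with any simple outlier rule (the plain z-score below is a hypothetical stand-in, not the actual detector): when most of the record is distorted, the distorted values define the statistical center, and the one clean segment is what gets flagged.

```python
from statistics import mean, stdev

def zscore_flags(values, threshold=2.0):
    # Plain z-score rule -- a hypothetical stand-in for the real detector
    m, s = mean(values), stdev(values)
    return [abs(v - m) / s > threshold for v in values]

# Eight distorted cycles and one clean one (made-up values):
# here the clean segment (10.0) is the mathematical outlier
tracing = [42.0, 39.5, 41.0, 40.2, 40.8, 41.5, 39.9, 10.0, 40.5]
print(zscore_flags(tracing))
```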

Until we are sure that an algorithm can ID data quality and artifacts as reliably as or more reliably than a human, the ethical and responsible thing is not to expect the algorithm to make the decision. Instead, we make the decision, and perhaps let the algorithm do the math and give us the level of significance.

Same with artifacts: let the algorithm do the math and tell us the level of significance regarding two things: 1) whether any of the measured values are outside normal limits compared to the others, and 2) whether the artifact events fit a pattern of activity we would expect if they occurred due to random chance alone. Just like the manual Test of Proportions in the present OSS-3, when the p-value is significant for non-random activity, that becomes a mathematical basis for concluding the activity was systematic. The only new thing is a method for comparing the measurements themselves and determining whether any of them appear to be outlier values indicative of artifact events.
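For what it's worth, here is a sketch of the kind of chance-alone test described above, under my own assumptions (the event counts and baseline rate are made up, and this is my reading of the idea, not the OSS-3 code): an exact one-sided binomial test of whether artifact events occur more often than a baseline rate would predict.

```python
from math import comb

def binomial_p_value(k, n, p0):
    """One-sided exact binomial test: P(X >= k) for X ~ Bin(n, p0).
    A small p-value suggests the events are not random noise."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical counts: 6 artifact events in 8 presentations of one
# question, against a 20% baseline artifact rate elsewhere in the record
print(round(binomial_p_value(6, 8, 0.20), 4))  # well below p = .05
```

A significant result here would be the mathematical basis for calling the activity systematic rather than random.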

There is obviously a lot remaining to be done. This is a project that has been on a slow trajectory for nearly two years. The result is about a 1 percentage point gain in decision accuracy. The cost is a slight increase in INCs, as some cases won't be scored due to artifacts.

The difference itself is not statistically significant. However, even non-significant incremental improvements may become important if we stack up enough of them.
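A quick back-of-the-envelope check, with hypothetical numbers, shows why a 1 percentage point gain won't reach significance at realistic sample sizes: a two-proportion z-test comparing 91% vs 90% accuracy over 100 cases each.

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF
    return z, p_value

# Hypothetical: 91% vs 90% decision accuracy on 100 cases each
z, p = two_proportion_z(0.91, 100, 0.90, 100)
print(round(p, 2))  # nowhere near the usual .05 cutoff
```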

r



